In article <web.41bf577e1db2d3fb765651f90@news.povray.org> , "Paris"
<par### [at] lycos com> wrote:
> Pov-Ray is lagging farther and farther behind commercial rendering software
> in terms of photo-realism. There are many reasons why this is the case,
> but some reasons are more evident and more easily solved than others.
I will give you the benefit of the doubt and assume you are just clueless
and not a troll.
> 1. The phong model is outdated now. The next release of pov-ray should
> use physically-based BRDFs, and only keep the phong model around for
> compatibility. Phong makes everything look like plastic, including what we
> have been calling "glass". The difference between pov-ray glass and
> physically-based glass in other packages is STRIKING. Real glass has a
> fresnel effect, where shallow-angled light reflected from the surface tends
> more towards a perfect mirror.
I would suggest you RTFM before making false statements about available
features in POV-Ray...
<http://www.povray.org/documentation/view/3.6.1/348/>
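For the record, angle-dependent reflection is set up roughly like this in a
3.5/3.6 scene file (a sketch only; the pigment and values here are
illustrative, not a recommended glass material):

```pov
// Sketch of variable (Fresnel) reflection per the 3.6 docs: light at
// shallow angles reflects like a mirror, head-on light barely reflects.
sphere { 0, 1
  texture {
    pigment { rgbf <0.98, 0.98, 0.98, 0.9> }  // mostly transmitting
    finish {
      reflection { 0.0, 1.0 fresnel on }      // min/max reflection amount
      conserve_energy                         // balance reflection vs. transmission
    }
  }
  interior { ior 1.5 }  // the Fresnel term reads the index of refraction from here
}
```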
> 2. Pov-ray does not have hair, fuzz, fur, or suede textures. Brushed metal
> would be nice also. And car paint too... I won't ask for
> subsurface-scattered flesh just yet; it tends to be very time-consuming to
> implement. There are many textures out there that can only be implemented
> using path-tracing techniques, such as very shiny, partially-reflective
> gold. A few others are glossy reflections (blurred reflections) and
> frosted glass.
Again, maybe you should RTFM. And brushed metal, well, can it get more
brushed than
<http://news.povray.org/povray.binaries.images/thread/%3Cweb.419777b7897a5e54b0aac12c0@news.povray.org%3E/>?
Apart from that, you do realise that hair and fur are not textures, right?
And car paint, how did you get the idea POV-Ray cannot render something as
trivial as car paint, which is hardly anything else but paint?
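(For anyone actually curious: one common way to fake brushed metal is a bump
normal stretched heavily along one axis. A sketch under that assumption, with
all values illustrative:)

```pov
// Crude brushed-metal sketch: metallic highlights plus a normal pattern
// scaled strongly along one axis to suggest directional scratches.
#declare BrushedMetal =
texture {
  pigment { rgb <0.75, 0.75, 0.78> }
  finish { metallic specular 0.8 roughness 0.02 reflection 0.3 }
  normal { bozo 0.4 scale <0.001, 1, 1> }  // anisotropic-looking grain
}
```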
> 3. Povray uses distributed ray-tracing to simulate global illumination.
Distributed ray-tracing (Cook et al.) is a supersampling (antialiasing)
technique; it is not a global illumination method.
> Reflected parts of the scene do not have the radiosity calculation
> performed on them. I have found this to be highly frustrating. (Simply
> create a radiosity room and stick a large reflecting ball in the middle to
> see what I mean.) The speed-ups to this method leave even the most
> advanced users scratching their heads. I've been using pov-ray since it was
> named DKB-Trace, and honestly, I'm still not sure what "minimum_reuse"
> means under the radiosity settings.
Clearly you have not, otherwise you would know that POV-Ray's radiosity was
added in POV-Ray 3.0, many years after DKB-Trace. And if you cannot use it,
well, it certainly ain't trivial, but that _you_ cannot use it says nothing
about POV-Ray's quality, only about _your_ abilities. And given your other
statements so far, I have serious doubts about those...
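For anyone reading along: minimum_reuse is simply the smallest radius, as a
fraction of the image size, within which a stored irradiance sample is always
reused instead of recomputed. A minimal settings block might look like this
(values illustrative, see the radiosity section of the manual):

```pov
// Minimal radiosity block, values illustrative (per the 3.6 docs).
global_settings {
  radiosity {
    count 200            // rays shot per new irradiance sample
    error_bound 0.5      // lower = more samples, smoother result
    minimum_reuse 0.015  // smallest reuse radius, as a fraction of image size
    recursion_limit 2    // number of diffuse bounces
  }
}
```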
> If you think about what distributed
> ray-tracing does, you will notice it works by tracing rays into DARK PARTS
> of the scene, hoping for a swath of light. It doesn't take a professor to
> realize that this is a wasted calculation. Tracing a ray into a dark part
> of the scene will mathematically never make a difference in the shaded
> pixel.
"No clue" probably describes your statement best: POV-Ray "radiosity" is a
Monte Carlo ray-tracing technique, so how you got the idea that it has
anything to do with sampling dark areas of a scene is beyond me. The algorithm
implemented is based on "A Ray Tracing Solution for Diffuse Interreflection"
by Ward, Rubinstein, and Clear in the Siggraph 1988 Proceedings.
> 4. There are other physically-based methods out there that turn ray-tracing
> on its head.
Ah, you mean the "real-world"? Sure, that will always be better than
ray-tracing.
> I pretty much expect future versions of Pov-ray to move away
> from the phong model (part 1) and implement a few BRDFs for popular
> surfaces, but other methods would be nice to see also, which I have less
> faith in. There are ways to calculate light in rendering in which you do
> not even use RGB color space. These algorithms use spectral integration,
> and create a large picture out of pixels that are colored with the
> SPECTRUM, rather than RGB triples.
You do realise the cost of a non-RGB implementation? There is a reason why
there are special programs for exact simulation of the wave effects of
light. Unless you need that level of accuracy, POV-Ray can simulate the
effect just like all "professional" rendering software does. I guess in all
your cluelessness you are talking about dispersion, so maybe RTFM
<http://www.povray.org/documentation/view/3.6.1/415/>.
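To spell it out, dispersion is a per-interior setting (a sketch per the 3.6
docs; the values are illustrative):

```pov
// Dispersion splits refracted light into a spectrum inside the interior.
sphere { 0, 1
  pigment { rgbf <1, 1, 1, 1> }  // fully transmitting
  interior {
    ior 1.5
    dispersion 1.02        // spread of the ior across the visible range
    dispersion_samples 10  // spectral samples per refracted ray
  }
}
```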
> 5. Even without spectral integration, you can render in RGB space and still
> do EXPOSURE simulations. (Usually it's the case that exposure simulation is
> not used unless a certain amount of "energy" is calculated to be passing
> through the camera's aperture, but it can be done ad hoc in RGB space also,
> by finagling.) This basically works by storing floating point triples into
> each pixel, none of which are CLIPPED or "tuned down" to fit into 0.0 -->
> 1.0.
I believe you want HDR image output. There is not a standard format for
that, but your talk of floating-point RGB values clearly shows you know
exactly nothing about what you are talking about. And no, I will not bother
to explain.
Thorsten
____________________________________________________
Thorsten Froehlich, Duisburg, Germany
e-mail: tho### [at] trf de
Visit POV-Ray on the web: http://mac.povray.org